Algorithms designed for single-agent reinforcement learning (RL) generally fail to converge to equilibria in two-player zero-sum (2p0s) games. Conversely, game-theoretic algorithms for approximating Nash and quantal response equilibria (QREs) in 2p0s games are typically not competitive for RL and can be difficult to scale. As a result, algorithms for these two settings are generally developed and evaluated separately. In this work, we show that a single algorithm, a simple extension to mirror descent with proximal regularization that we call magnetic mirror descent (MMD), can produce strong results in both settings despite their fundamental differences. From a theoretical standpoint, we prove that MMD converges linearly to QREs in extensive-form games; this is the first time linear convergence has been proven for a first-order solver. Moreover, applied as a tabular Nash equilibrium solver via self-play, we show empirically that MMD produces results competitive with CFR in both normal-form and extensive-form games with full feedback (the first time a standard RL algorithm has done so), and that MMD also converges empirically in black-box feedback settings. Furthermore, for single-agent deep RL, on a small collection of Atari and Mujoco games, we show that MMD can produce results competitive with those of PPO. Finally, for multi-agent deep RL, we show that MMD can outperform NFSP in 3x3 Abrupt Dark Hex.
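For intuition, the sketch below shows what a magnetic-mirror-descent-style update can look like on the probability simplex with a negative-entropy mirror map. The closed-form update, the stepsize eta, the temperature alpha, and the uniform magnet policy rho are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import numpy as np

def mmd_update(pi, q, rho, eta=0.1, alpha=0.5):
    """One magnetic-mirror-descent-style step on the probability simplex.

    pi:  current policy (probabilities over actions)
    q:   action values observed this iteration
    rho: the "magnet" policy the update is regularized toward
    With a negative-entropy mirror map the proximal step has the closed form
      pi'(a) proportional to [pi(a) * rho(a)**(alpha*eta) * exp(eta*q(a))]**(1/(1+alpha*eta)).
    """
    logits = (np.log(pi) + alpha * eta * np.log(rho) + eta * q) / (1.0 + alpha * eta)
    new_pi = np.exp(logits - logits.max())   # subtract max for numerical stability
    return new_pi / new_pi.sum()

# Tiny usage example with a fixed value vector; the fixed point is a
# softmax-like policy over q, kept stochastic by the entropy regularization.
pi = np.ones(3) / 3
rho = np.ones(3) / 3                         # uniform magnet
q = np.array([1.0, 0.2, -0.5])
for _ in range(500):
    pi = mmd_update(pi, q, rho)
print(pi)
```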
We investigate the convergence of stochastic mirror descent (SMD) in relatively smooth and smooth convex optimization. In relatively smooth convex optimization, we provide new convergence guarantees for SMD with a constant stepsize. For smooth convex optimization, we propose a new adaptive stepsize scheme: the mirror stochastic Polyak stepsize (mSPS). Notably, our convergence results in both settings do not make bounded-gradient or bounded-variance assumptions, and we show convergence to a neighborhood that vanishes under interpolation. mSPS generalizes the recently proposed stochastic Polyak stepsize (SPS) (Loizou et al., 2021) to mirror descent, and remains practical and efficient for modern machine learning applications while inheriting the benefits of mirror descent. We complement our results with experiments on various supervised learning tasks and different instances of SMD, demonstrating the effectiveness of mSPS.
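As a rough illustration of the Polyak-stepsize idea behind mSPS, the sketch below implements the Euclidean special case (plain SGD with a stochastic Polyak stepsize). The lower-bound estimate loss_lb = 0, the scaling constant c, and the cap gamma_max are assumptions for the sketch; mSPS itself replaces the Euclidean norm with the dual norm of the chosen mirror map.

```python
import numpy as np

def sps_step(w, loss_val, grad, loss_lb=0.0, c=0.5, gamma_max=1.0):
    """One SGD step with a stochastic Polyak stepsize.

    gamma_t = (f_i(w) - l_i*) / (c * ||grad f_i(w)||^2), capped at gamma_max.
    mSPS would replace the squared Euclidean norm with the squared dual norm
    of the chosen mirror map; only the Euclidean case is sketched here.
    """
    gamma = min((loss_val - loss_lb) / (c * float(np.dot(grad, grad)) + 1e-12), gamma_max)
    return w - gamma * grad

# Usage on a toy least-squares problem with f_i(w) = 0.5 * (x_i @ w - y_i)**2.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w = np.zeros(5)
for _ in range(2000):
    i = rng.integers(len(y))
    resid = X[i] @ w - y[i]
    w = sps_step(w, 0.5 * resid**2, resid * X[i])
```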
Hindsight rationality is an approach to playing general-sum games that prescribes no-regret learning dynamics for individual agents with respect to a set of deviations, and further describes jointly rational behavior among multiple agents with mediated equilibria. To develop hindsight rational learning in sequential decision-making settings, we formalize behavioral deviations as a general class of deviations that respect the structure of extensive-form games. Integrating the idea of time selection into counterfactual regret minimization (CFR), we introduce the extensive-form regret minimization (EFR) algorithm, which is hindsight rational for any given set of behavioral deviations with computation that scales closely with the complexity of that set. We identify behavioral deviation subsets, the partial sequence deviation types, that subsume previously studied types and lead to efficient EFR instances in games of moderate length. In addition, we present a thorough empirical analysis of EFR instantiated with different deviation types in benchmark games, where we find that stronger types typically induce better performance.
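As background for readers unfamiliar with the CFR machinery that EFR extends, the sketch below shows regret matching, the local regret minimizer commonly used at each decision point. It illustrates the no-regret primitive only; the behavioral deviation sets and time selection that define EFR are not reproduced here.

```python
import numpy as np

def regret_matching_policy(cum_regret):
    """Play actions in proportion to positive cumulative regret (uniform if none)."""
    pos = np.maximum(cum_regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.ones_like(pos) / len(pos)

# External-regret minimization against a fixed vector of action utilities.
utilities = np.array([0.0, 1.0, 0.3])
cum_regret = np.zeros(3)
avg_policy = np.zeros(3)
T = 1000
for _ in range(T):
    pi = regret_matching_policy(cum_regret)
    cum_regret += utilities - pi @ utilities   # regret for not having played each action
    avg_policy += pi
print(avg_policy / T)                          # the average policy favors the best action
```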
Driven by recent successes in two-player, zero-sum games, work on artificial intelligence for games has increasingly focused on algorithms that produce equilibrium-based strategies. However, this approach has been less effective at producing competent players in general-sum games, or in games with more than two players, than it has been in two-player zero-sum games. An appealing alternative is to consider adaptive algorithms that ensure strong performance in hindsight relative to what could have been achieved with modified behavior. This approach also leads to a game-theoretic analysis, but in the correlated play that arises from joint learning dynamics rather than from equilibrium agent behavior. We develop and advocate for this hindsight rationality framing of learning in general sequential decision-making settings. To this end, we re-examine mediated equilibria and deviation types in extensive-form games, thereby gaining a more complete understanding and resolving past misconceptions. We present a set of examples illustrating the distinct strengths and weaknesses of each type of equilibrium in the literature, and prove that no tractable concept subsumes all the others. This line of inquiry culminates in the definition of the deviation and equilibrium classes that correspond to algorithms in the counterfactual regret minimization (CFR) family, relating them to all others in the literature. Examining CFR in greater detail further leads to a new recursive definition of rationality in correlated play that extends sequential rationality in a way that naturally applies to hindsight evaluation.
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
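A minimal sketch of the estimator described above might form the Voronoi adjacency with scipy and solve the discrete TV-penalized least-squares problem with cvxpy. The unit edge weights and the penalty parameter lam below are simplifying assumptions (the abstract's weighted differences are replaced by unit weights), so this is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import Voronoi
import cvxpy as cp

def voronoigram_fit(x, y, lam=0.5):
    """Discrete-TV regression over the Voronoi diagram of the design points.

    Solves  min_theta 0.5 * ||y - theta||^2 + lam * sum_{(i,j)} |theta_i - theta_j|,
    where (i, j) ranges over pairs of design points whose Voronoi cells are
    neighbors. Unit edge weights are used here purely for simplicity.
    """
    vor = Voronoi(x)
    edges = vor.ridge_points                 # pairs of neighboring cells
    theta = cp.Variable(len(y))
    tv = cp.sum(cp.abs(theta[edges[:, 0]] - theta[edges[:, 1]]))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - theta) + lam * tv)).solve()
    return theta.value

# Toy example: a noisy piecewise-constant signal over random 2D design points.
rng = np.random.default_rng(0)
x = rng.uniform(size=(200, 2))
y = (x[:, 0] > 0.5).astype(float) + 0.3 * rng.normal(size=200)
theta_hat = voronoigram_fit(x, y)
```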
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
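To make the representation concrete, the toy sketch below lays out what hyperedge-dependent node embeddings could look like as a data structure, with one vector per (node, hyperedge) pair plus one vector per hyperedge, and a single illustrative mixing step. It is an assumption-laden illustration of the representation only, not the HNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
hyperedges = {0: [0, 1, 2], 1: [1, 3]}            # hyperedge id -> member node ids

# One embedding per hyperedge, plus one embedding per (node, hyperedge) pair,
# so node 1 gets a different vector inside hyperedge 0 than inside hyperedge 1.
edge_emb = {e: rng.normal(size=d) for e in hyperedges}
node_edge_emb = {(v, e): rng.normal(size=d)
                 for e, members in hyperedges.items() for v in members}

# A single, assumed mixing step: pull each (node, hyperedge) embedding toward its
# hyperedge embedding, then refresh the hyperedge embedding from its members.
for e, members in hyperedges.items():
    for v in members:
        node_edge_emb[(v, e)] = 0.5 * (node_edge_emb[(v, e)] + edge_emb[e])
    edge_emb[e] = np.mean([node_edge_emb[(v, e)] for v in members], axis=0)
```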
Graph Neural Networks (GNNs) have become increasingly important in recent years due to their state-of-the-art performance on many important downstream applications. Existing GNNs have mostly focused on learning a single node representation, even though a node often exhibits polysemous behavior in different contexts. In this work, we develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. Such disentangled representations are more interpretable and useful than a single embedding. Furthermore, PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings. The framework is flexible, and its general design makes the learned embeddings widely applicable across domains. We utilize publicly available benchmark datasets to evaluate our approach against a variety of baselines. The experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks including link prediction, where we achieve an average gain of 15% while remaining competitive for node classification. Finally, we also demonstrate the utility of PersonaSAGE with a case study for personalized recommendation of different entity types in a data management platform.
Traditionally, data analysis and theory have been viewed as separate disciplines, each feeding into fundamentally different types of models. Modern deep learning technology is beginning to unify these two disciplines and will produce a new class of predictively powerful space weather models that combine the physical insights gained from data and theory. We call on NASA to invest in the research and infrastructure necessary for the heliophysics community to take advantage of these advances.
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by either modifying the graph structure or objective function without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first aims to construct fair neighborhoods for any arbitrary node in a graph, and the second enables adaptation of these fair neighborhoods to better capture certain application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach achieves not only better fairness but also increases the accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
We introduce a language generation task grounded in a popular video game environment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration) involves generating dialogue trees conditioned on an ontology captured in natural language passages providing quest and entity specifications. KNUDGE is constructed from side quest dialogues drawn directly from game data of Obsidian Entertainment's The Outer Worlds, leading to real-world complexities in generation: (1) dialogues are branching trees as opposed to linear chains of utterances; (2) utterances must remain faithful to the game lore--character personas, backstories, and entity relationships; and (3) a dialogue must accurately reveal new quest-related details to the human player. We report results for supervised and in-context learning techniques, finding there is significant room for future work on creating realistic game-quality dialogues.